

Visual Concepts Tokenization Appendix

Neural Information Processing Systems

This is quite similar to what VCT can learn on the synthesized dataset Objects-Room. As the real-world dataset is more diverse, we observe several failure cases, shown in Figure 8. We suppose these failure cases arise because VCT, trained with a reconstruction loss, is not good at synthesizing counterfactual samples that are far from the data distribution.





A Framework for Quantifying How Pre-Training and Context Benefit In-Context Learning

Song, Bingqing, Li, Jiaxiang, Wang, Rong, Lu, Songtao, Hong, Mingyi

arXiv.org Artificial Intelligence

Pre-trained large language models have demonstrated a strong ability to learn from context, known as in-context learning (ICL). Despite a surge of recent applications that leverage such capabilities, it is by no means clear, at least theoretically, how the ICL capabilities arise, and in particular, what precise roles are played by key factors such as the pre-training procedure and the context construction. In this work, we propose a new framework to analyze ICL performance for a class of realistic settings, covering network architectures, data encoding, data generation, and the prompt construction process. As a first step, we construct a simple example with a one-layer transformer and show an interesting result: when the pre-training data distribution is different from the query task distribution, a properly constructed context can shift the output distribution towards the query task distribution, in a quantifiable manner, leading to accurate prediction on the query topic. We then extend the findings of this first step to a more general case and derive the precise relationship between ICL performance, context length, and the KL divergence between the pre-training and query task distributions. Finally, we provide experiments to validate our theoretical results.
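The mechanism the abstract describes, where in-context examples shift the model's output distribution toward the query task, can be illustrated with a toy Bayesian-mixture calculation. The sketch below is our own illustration, not the paper's framework: the two "topics", their token distributions, and the prior are made-up numbers, and the posterior update stands in for whatever the trained transformer actually computes.

```python
import numpy as np

# Toy illustration (not the paper's construction): a Bayesian-mixture view of ICL.
# The "pre-trained model" holds a prior over two topics; in-context examples drawn
# from the query topic shift the posterior toward that topic as context length grows.

rng = np.random.default_rng(0)

# Two hypothetical topics over a binary token vocabulary {0, 1}.
p_topic = {"pretrain_topic": 0.2, "query_topic": 0.8}   # P(token = 1 | topic)
prior = {"pretrain_topic": 0.9, "query_topic": 0.1}      # prior favors the pre-training topic

def posterior_after_context(context):
    """Posterior over topics after observing the in-context tokens (Bayes' rule)."""
    log_post = {}
    for topic, p in p_topic.items():
        loglik = sum(np.log(p) if tok == 1 else np.log(1 - p) for tok in context)
        log_post[topic] = np.log(prior[topic]) + loglik
    z = np.logaddexp(*log_post.values())
    return {t: np.exp(v - z) for t, v in log_post.items()}

for n in [0, 2, 8, 32]:
    context = rng.binomial(1, p_topic["query_topic"], size=n)  # context drawn from the query topic
    post = posterior_after_context(context)
    print(f"context length {n:2d} -> P(query_topic | context) = {post['query_topic']:.3f}")
```

Running this, the posterior mass on the query topic grows with context length, and the rate at which it does so depends on how far apart the two topic distributions are, which is the qualitative relationship between context length and distribution mismatch that the paper quantifies.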




Concept-SAE: Active Causal Probing of Visual Model Behavior

Ding, Jianrong, Chen, Muxi, Zhao, Chenchen, Xu, Qiang

arXiv.org Artificial Intelligence

Standard Sparse Autoencoders (SAEs) excel at discovering a dictionary of a model's learned features, offering a powerful observational lens. However, the ambiguous and ungrounded nature of these features makes them unreliable instruments for the active, causal probing of model behavior. To solve this, we introduce Concept-SAE, a framework that forges semantically grounded concept tokens through a novel hybrid disentanglement strategy. We first quantitatively demonstrate that our dual-supervision approach produces tokens that are remarkably faithful and spatially localized, outperforming alternative methods in disentanglement. This validated fidelity enables two critical applications: (1) we probe the causal link between internal concepts and predictions via direct intervention, and (2) we probe the model's failure modes by systematically localizing adversarial vulnerabilities to specific layers. Concept-SAE provides a validated blueprint for moving beyond correlational interpretation to the mechanistic, causal probing of model behavior.
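For readers unfamiliar with the baseline the abstract starts from, the sketch below shows a generic sparse autoencoder trained on model activations: an overcomplete linear encoder with a ReLU, a linear decoder, and a reconstruction loss with an L1 sparsity penalty. This is only the standard SAE setup; Concept-SAE's dual supervision, concept tokens, and intervention machinery are not implemented here, and the layer width, dictionary size, and coefficient are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of a *standard* sparse autoencoder over model activations,
# i.e. the baseline the abstract builds on. Dimensions are illustrative only.

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 768, d_dict: int = 8192):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)   # overcomplete feature dictionary
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, activations: torch.Tensor):
        codes = torch.relu(self.encoder(activations))  # sparse feature codes
        recon = self.decoder(codes)                     # reconstructed activations
        return recon, codes

def sae_loss(recon, activations, codes, l1_coeff: float = 1e-3):
    # Reconstruction term plus an L1 penalty that encourages sparse codes.
    recon_err = (recon - activations).pow(2).mean()
    sparsity = codes.abs().mean()
    return recon_err + l1_coeff * sparsity

# Usage: in practice the inputs are activations collected from a chosen layer
# of the probed vision model; random tensors stand in for them here.
acts = torch.randn(64, 768)
sae = SparseAutoencoder()
recon, codes = sae(acts)
loss = sae_loss(recon, acts, codes)
loss.backward()
```

The abstract's point is that the feature directions such a dictionary discovers are unsupervised and ungrounded, which is why the paper adds supervision to tie dictionary elements to named concepts before using them for causal interventions.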